Miho SHINOHARA Yukina TAMURA Shinya MOCHIDUKI Hiroaki KUDO Mitsuho YAMADA
We investigated the function of the lateral geniculate nucleus in avoidance behavior caused by the inconsistency between binocular retinal images due to blue, based on vergence eye movements measured while subjects watched the rim of a blue-yellow equiluminant column.
Miho SHINOHARA Reiko KOYAMA Shinya MOCHIDUKI Mitsuho YAMADA
We measured accommodation and convergence eye movements while subjects watched high-resolution images, paying attention to the amount of change at specified gaze positions in the images for each resolution. When the images were presented at high resolution, the changes in convergence angle and accommodation followed the actual depth composition of the image.
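For reference, the convergence angle discussed here follows from simple viewing geometry (a standard relation; the abstract itself does not state the formula used): for interpupillary distance p and fixation distance D,

    \theta = 2 \arctan\left(\frac{p}{2D}\right)

so that, for example, p = 63 mm at D = 1 m gives a convergence angle of about 3.6 degrees; a change in apparent depth shifts this angle accordingly.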
Nobuhiko WAGATSUMA Mika URABE Ko SAKAI
Figure-ground (FG) segregation has been considered a fundamental step toward object recognition. We explored plausible mechanisms that estimate global figure-ground segregation from local image features by investigating the human visual system. Physiological studies have reported border-ownership (BO) selective neurons in V2 that signal the local direction of figure (DOF) along a border; however, how local BO signals contribute to global FG segregation has not been clarified. BO and FG processing could be independent, dependent on each other, or inseparable. Investigating the differences and similarities between BO and FG judgments is important for exploring plausible mechanisms that enable global FG estimation from local cues. We performed psychophysical experiments that included two different tasks, each of which focused on the judgment of either BO or FG. The perceptual judgments showed consistency between BO and FG determination, while a longer gaze-movement distance was observed in FG segregation than in BO discrimination. These results suggest the involvement of distinct neural mechanisms for local BO determination and global FG segregation.
Yuki KUROSAWA Shinya MOCHIDUKI Yuko HOSHINO Mitsuho YAMADA
We measured eye movements at gaze points while subjects performed calculation tasks and examined the relationship between these eye movements and the subjects' fatigue and/or internal state across tasks. The results suggested that a subject's fatigue and/or internal state affected eye movements at gaze points, and that both could be measured in real time from these eye movements.
Toyotaro TOKIMOTO Shintaro TOKIMOTO Kengo FUJII Shogo MORITA Hirotsugu YAMAMOTO
We propose a method to realize subjective super-resolution on a high-speed LED display, which dynamically shows a set of four neighboring pixels on every LED pixel. We have experimentally confirmed the subjective super-resolution effect. This paper proposes a subjective super-resolution hypothesis for the human visual system and reports simulation results with pseudo fixational eye movements.
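A minimal sketch of one reading of this scheme (the decomposition and the idealized retinal-shift assumption are ours, not the authors' simulation code): a 2H x 2W source image is split into four H x W subframes, and if fixational eye movement shifts the retinal image by half an LED pixel in step with the subframe cycle, the temporal average on the retina recovers the full-resolution image.

    import numpy as np

    def subframes(img_hi):
        # One subframe per position within each 2x2 pixel block
        # ("four neighboring pixels on every LED pixel").
        return {(dy, dx): img_hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

    def retinal_average(img_hi):
        # Idealized limit: each subframe lands on a distinct retinal
        # position, so their temporal average equals the source image.
        acc = np.zeros(img_hi.shape, dtype=float)
        for (dy, dx), sf in subframes(img_hi).items():
            acc[dy::2, dx::2] = sf
        return acc

In practice the perceived resolution gain depends on how well actual fixational eye movements match this assumed half-pixel sampling.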
Takahide OTOMO Shinya MOCHIDUKI Eriko ISHII Yuko HOSHINO Mitsuho YAMADA
We can enjoy various video content, such as movies, in several ways. In this report, we show the effects of content differences on physiological parameters such as eye movements and critical flicker frequency (CFF). We confirmed differences in responses after watching a movie, and also observed a consistent change that can be inferred to be caused by the movie. Our results showed that content differences affect these parameters, suggesting that the influence of movie content on the viewer can be evaluated using physiological parameters.
Tsuyoshi KUSHIMA Miyuki SUGANUMA Shinya MOCHIDUKI Mitsuho YAMADA
Over the last 10 years, tablets have spread to the point where we can now read electronic books (e-books) like paper books. There is a long history of studies of eye movement during reading, and remarkable results have been reported for reading experiments in which displayed letters are changed in conjunction with eye movement. However, these studies were conducted in the 1970s, and from the available descriptions of the experimental techniques it is difficult to judge whether the display time was correctly controlled when the letters were changed. Here, we propose an experimental system that controls the displayed information, as well as the display time, exactly, and we use it to re-examine the results of past reading research, with the aim of being at the forefront of reading research in the e-book era.
Shinya MOCHIDUKI Reina WATANABE Hideaki TAKAHIRA Mitsuho YAMADA
We measured head and eye movements while subjects viewed 4K high-definition images to clarify the influence of different viewing positions. Subjects viewed three images from nine viewing positions (three viewing distances × three lateral positions). Although subjects' heads rotated toward the center of the screen irrespective of viewing position, they also tended to face straight ahead as the viewing distance to the image decreased.
Selective visual attention is an integral mechanism of the human visual system that is often neglected when designing perceptually relevant image and video quality metrics. Disregarding attention mechanisms assumes that all distortions in the visual content affect overall quality perception equally, which is typically not the case. Over the past years we have performed several experiments to study the effect of visual attention on quality perception. In addition to gaining a deeper scientific understanding of this matter, we were also able to use this knowledge to further improve various quality prediction models. In this article, I review our work with the aim of increasing awareness of the importance of visual attention mechanisms for the effective design of quality prediction models.
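As one common way to fold attention into a quality metric (a generic sketch, not any specific model from the reviewed work), a saliency or fixation-density map can weight a per-pixel error measure so that distortions in attended regions count more:

    import numpy as np

    def weighted_mse(ref, dist, saliency):
        # Attention-weighted MSE: errors are weighted by normalized
        # saliency, so distortions where viewers look dominate the score.
        w = saliency / saliency.sum()
        err = ref.astype(float) - dist.astype(float)
        return float((w * err ** 2).sum())

    def weighted_psnr(ref, dist, saliency, peak=255.0):
        return 10.0 * np.log10(peak ** 2 / weighted_mse(ref, dist, saliency))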
Hideaki TAKAHIRA Kei KIKUCHI Mitsuho YAMADA
We developed a system for comprehensively evaluating the gaze motions of a person operating a small electronic device such as a PDA or tablet computer. When people operate small electronic devices, they hold the device in their hand and gaze at it, so their hand movements while holding the device are considered part of the movement involved in operating it. Our measurement system uses a video camera image taken from behind the subject as a substitute for the view camera of an eye-tracking recorder. With the new system, it is also possible to measure the subject's gaze superimposed on the view image by directly inputting the display screen from a small electronic terminal or other display. We converted the subjects' head and hand movements into the equivalent of eye movements and calculated the gaze from these values; we then transformed the gaze coordinates into view-image coordinates and superimposed each gaze on the view image. We examined hand movement in relation to gaze movement by measuring the two simultaneously. We evaluated the accuracy of the new system in several experiments: first an experiment testing gaze movement as the summation of head and eye movements, and then an experiment testing the system's accuracy in measuring hand movements. The results showed that an accuracy of approximately 6.1° or better was obtained over a 120° horizontal and 90° vertical range, and that hand motions converted into the angle equivalent to gaze movement could be detected with approximately 1.2° accuracy for 5° and 10° hand movements. When the subject's hand moved forward, the displacement was converted into the angle equivalent to gaze movement using the distance between the terminal and the subject's eyes.
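The forward-movement conversion described at the end can be sketched as follows (a minimal illustration of the stated geometry; the function and variable names are ours):

    import math

    def displacement_to_angle(d_mm, eye_to_terminal_mm):
        # Convert a hand/terminal displacement d into the equivalent
        # visual angle, given the distance between the terminal and
        # the subject's eyes.
        return math.degrees(2.0 * math.atan(d_mm / (2.0 * eye_to_terminal_mm)))

    # e.g. a 35 mm shift viewed at 400 mm is roughly a 5-degree gaze movement
    print(displacement_to_angle(35, 400))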
Hideaki TAKAHIRA Ryouichi ISHIKAWA Kei KIKUCHI Tatsuya SHINKAWA Mitsuho YAMADA
We investigated subjects' gaze movement when reading e-books and compared it with that when reading traditional paper books. By examining the eye motion associated with the reader encountering new lines and new pages during reading, we found that each move to a new line was completed with one saccade in both e-books and paper books, but that saccade patterns differed between the two when the reader encountered a new page. In e-books, a regular eye movement, such as a steady gaze at the start position of the next page, was repeated. In contrast, in paper books there was no regularity in eye movement during this transition, showing that reading behavior is variable and depends on the individual.
Kei KIKUCHI Hideaki TAKAHIRA Ryouichi ISHIKAWA Eiki WAKAMATSU Tatsuya SHINKAWA Mitsuho YAMADA
We developed a device to measure gaze and hand movement in a natural setting such as while reading a book on a train or bus. We examined what kind of cooperation exists among the head, eye and hand movements while subjects were reading a book held in the hand.
Takashi NAGAMATSU Yukina IWAMOTO Ryuichi SUGANO Junzo KAMAHARA Naoki TANAKA Michiya YAMAMOTO
We have proposed a novel geometric model of the eye in order to avoid the problems encountered when the conventional spherical model of the cornea is used for three-dimensional (3D) model-based gaze estimation. The proposed model represents the eye, including the boundary region of the cornea, as a general surface of revolution about the optical axis of the eye. Furthermore, we have proposed a method for calculating the point of gaze (POG) on the basis of this model, and developed a prototype system for estimating the POG using it. The average root mean square errors (RMSEs) of the proposed method were experimentally found to be smaller than those of the gaze estimation method based on a spherical model of the cornea.
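In general form (a standard parameterization; the authors' exact notation is not reproduced here), a surface of revolution about the optical axis z, generated by a profile curve r = f(z), is

    (x, y, z) = (f(z)\cos\phi,\ f(z)\sin\phi,\ z), \qquad \phi \in [0, 2\pi)

and the conventional spherical cornea is recovered as the special case f(z) = \sqrt{R^2 - z^2}, which is why the proposed model subsumes the spherical one.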
Takashi NAGAMATSU Ryuichi SUGANO Yukina IWAMOTO Junzo KAMAHARA Naoki TANAKA
This paper presents a user-calibration-free method for estimating the point of gaze (POG). The method provides a fast and stable solution that realizes user-calibration-free gaze estimation more accurately than the conventional approach of using the optical axis of the eye as an approximation of its visual axis. The optical axis of the eye is estimated by using two cameras and two light sources, on the basis of a spherical model of the cornea. The point of intersection of the optical axis of the eye with the object that the user gazes at is termed the POA. On the assumption that the visual axes of both eyes intersect on the object, the POG is estimated approximately, using a binocular 3D eye model, as the midpoint of the line joining the POAs of both eyes. Based on this method, we developed a prototype system comprising a 19″ display and two pairs of stereo cameras. We evaluated the system experimentally with 20 subjects seated at a distance of 600 mm from the display. The root-mean-square error (RMSE) of POG measurement in the display screen coordinate system was 1.58°.
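The midpoint construction can be sketched geometrically for a flat display (a minimal illustration; variable names are ours, and the real system estimates the optical axes from the stereo cameras):

    import numpy as np

    def ray_plane_intersection(eye_center, optical_axis, plane_point, plane_normal):
        # POA: where an eye's optical axis meets the display plane.
        t = np.dot(plane_point - eye_center, plane_normal) / np.dot(optical_axis, plane_normal)
        return eye_center + t * optical_axis

    def binocular_pog(c_l, ax_l, c_r, ax_r, plane_point, plane_normal):
        # POG approximated as the midpoint of the two eyes' POAs.
        poa_l = ray_plane_intersection(c_l, ax_l, plane_point, plane_normal)
        poa_r = ray_plane_intersection(c_r, ax_r, plane_point, plane_normal)
        return 0.5 * (poa_l + poa_r)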
Yusuke HORIE Yuta KAWAMURA Akiyuki SEITA Mitsuho YAMADA
The purpose of this study was to clarify whether viewers can perceive digital deterioration in a digitally compressed image while pursuing it as it moves rapidly. Among the various forms of digital deterioration, we studied the perception characteristics of false contours on four types of displays (CRT, PDP, EL, and LCD), using the gradation level and the speed of the moving image as parameters. It is known that 8 bits is not a high enough gradation resolution for still images, and it can be assumed that 8 bits is also not enough for an image moving at less than 5 deg/sec, since the tracking accuracy of smooth pursuit eye movement (SPEM) is very high for targets moving at less than 5 deg/sec. Given these facts, we focused on images moving at more than 5 deg/sec. In our results, images deteriorated by a false contour at gradation levels below 32 were perceived by every subject at almost all velocities, from 5 deg/sec to 30 deg/sec, on all four types of displays. However, the perception rate decreased drastically when the gradation level reached 64, with almost no subjects detecting deterioration at gradation levels above 64 at any velocity. Compared to the other displays, LCDs yielded relatively high recognition rates at a gradation level of 64, especially at lower velocities.
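The kind of stimulus degradation studied here can be reproduced with a simple gradation reduction (an illustrative sketch, not the authors' stimulus-generation code):

    import numpy as np

    def reduce_gradation(img8, levels):
        # Requantize an 8-bit image to the given number of gradation
        # levels; on a smooth ramp the steps appear as false contours.
        step = 256 // levels
        return (img8 // step) * step + step // 2

    ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
    coarse = reduce_gradation(ramp, 32)  # banding visible per the results
    fine = reduce_gradation(ramp, 64)    # banding largely undetected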
The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from an AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccades). A rule-based approach utilizing MPEG-4 FAPs (facial animation parameters) is introduced to generate primary emotions (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions). In addition, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language) is proposed that enables authors to describe and generate the emotional eye movement of virtual agents.
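The rule-based mixture of primary emotions can be pictured as a weighted blend of eye-behavior parameters (all names and values below are hypothetical placeholders; the actual parameters come from the AU-coded database and MPEG-4 FAPs):

    # Hypothetical per-emotion eye parameters (illustrative values only).
    PRIMARY = {
        "joyful": {"pupil_scale": 1.2, "blink_rate": 1.1, "saccade_amp": 1.0},
        "sad":    {"pupil_scale": 0.9, "blink_rate": 0.7, "saccade_amp": 0.6},
    }

    def intermediate(e1, e2, w=0.5):
        # An intermediate emotion as a weighted mixture of two primaries.
        return {k: w * PRIMARY[e1][k] + (1 - w) * PRIMARY[e2][k]
                for k in PRIMARY[e1]}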
Hidetake UWANO Masahide NAKAMURA Akito MONDEN Ken-ichi MATSUMOTO
This paper proposes using eye movements to characterize the performance of individuals reviewing software documents. We designed and implemented a system called DRESREM, which measures and records the eye movements of document reviewers. Based on the eye movements captured by an eye-tracking device, the system computes the line number of the document that the reviewer is currently looking at. The system can also record and play back how the eyes moved during the review process. To evaluate the effectiveness of the system, we conducted an experiment analyzing 30 source code review sessions (6 programs, 5 subjects). As a result, we identified a particular pattern, called a scan, in the subjects' eye movements. Quantitative analysis showed that reviewers who did not spend enough time on the scan took more time on average to find defects.
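Mapping a fixation to a document line reduces to simple viewport geometry once the rendered line height is known (a simplified sketch of what such a system must compute; the names are ours):

    def gaze_to_line(gaze_y_px, view_top_px, line_height_px, n_lines):
        # 1-based line number under the vertical gaze coordinate,
        # clamped to the document's extent.
        line = int((gaze_y_px - view_top_px) // line_height_px) + 1
        return min(max(line, 1), n_lines)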
Junichi HORI Koji SAKANO Yoshiaki SAITOH
A communication support interface controlled by eye movements and voluntary eye blinks has been developed for disabled individuals with motor paralysis who cannot speak. Horizontal and vertical electro-oculograms were measured using two surface electrodes attached above and beside the dominant eye, with reference to an earlobe electrode, and amplified with AC coupling to reduce unwanted drift. Four directional cursor movements (up, down, right, and left) and one selection operation were realized by logically combining the two detected channel signals based on threshold settings specific to the individual. Letter-input experiments were conducted on a virtual screen keyboard. The method's usability was enhanced by minimizing the number of electrodes and by training both the subject and the device. As a result, an accuracy of 90.1 ± 3.6% and a processing speed of 7.7 ± 1.9 letters/min were obtained with our method.
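The logical combination of the two channels might look like the following (an illustrative sketch; the actual thresholds are tuned per subject, and the blink-detection rule here is our assumption):

    def classify_eog(h, v, th_h, th_v):
        # h, v: drift-reduced horizontal/vertical EOG samples;
        # th_h, th_v: per-user thresholds.
        if v > 3 * th_v:          # assumed rule: a large vertical spike
            return "select"       # from a voluntary blink selects
        if abs(h) >= abs(v):
            if h > th_h:
                return "right"
            if h < -th_h:
                return "left"
        else:
            if v > th_v:
                return "up"
            if v < -th_v:
                return "down"
        return "none"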
Kanae NAOI Koji NAKAMAE Hiromu FUJIOKA Takao IMAI Kazunori SEKINE Noriaki TAKEDA Takeshi KUBO
We have developed a three-dimensional eye movement simulator. The simulator allows us to extract the instantaneous rotation axes of eye movement from clinical data sequences; it calculates the plane formed by those rotation axes and displays it on an eyeball together with the axes, and it also extracts the innervations of the eye muscles. The simulator is implemented mainly with the computer graphics library OpenGL. First, the simulator was applied to saccadic eye movement data to show the so-called Listing's plane, on which all hypothetical rotation axes lie. Next, it was applied to clinical data sequences from two patients with benign paroxysmal positional vertigo (BPPV). The instantaneous rotation axes and eye-muscle innervations extracted from these data sequences showed distinctive characteristics. These results are useful for elucidating the mechanism of vestibular symptoms, particularly vertigo.
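The axis extraction and plane fit can be sketched as follows (a minimal version under the usual unit-quaternion representation of eye orientation; the simulator's own implementation details are not given in the abstract):

    import numpy as np

    def instantaneous_axis(q1, q2):
        # Axis of the rotation taking orientation q1 to q2, with unit
        # quaternions as [w, x, y, z]: the normalized vector part of
        # the relative rotation q2 * conj(q1).
        w1, v1 = q1[0], np.asarray(q1[1:], float)
        w2, v2 = q2[0], np.asarray(q2[1:], float)
        v = -w2 * v1 + w1 * v2 - np.cross(v2, v1)
        return v / np.linalg.norm(v)

    def fit_plane_normal(axes):
        # Least-squares plane through the origin containing the axes
        # (e.g. Listing's plane for saccades): its normal is the
        # right-singular vector with the smallest singular value.
        _, _, vt = np.linalg.svd(np.asarray(axes))
        return vt[-1]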
Hiroyuki HOSHINO Shin'ichi KOJIMA Yuji UCHIYAMA Takero HONGO
Recently, information display equipment such as navigation systems has increasingly been installed in vehicles, and a variety of useful information is offered to the driver by voice and images while driving. The necessity of improving safety when the driver receives such information has come to be stressed. One means of addressing this problem is a system that presents information on driving and road conditions, such as a lane-changing car, to the driver using a warning sound. The purpose of our study is to clarify the effectiveness of an auditory display that uses spatial sound in such a system. An experiment measuring drivers' reaction times and eye movements in response to LED lighting during actual driving was carried out to investigate whether spatial sound can quicken the driver's operation and decrease human error. We evaluated effectiveness by two measures: average reaction time and the number of largely delayed reactions. We considered that the average reaction time corresponds to the quickness of the driver's operation, and that the number of largely delayed reactions corresponds to the probability of human error. As a result of the experiment, the use of directional sound clearly showed better performance than the use of monaural sound or no sound in terms of the number of largely delayed reactions. Moreover, we analyzed the factors involved in delayed reactions using the results of the eye movement measurements. Consequently, it was found that directional sound can decrease the number of largely delayed reactions, which could lead to accidents during actual driving.